
    Nonlinearity Mitigation in WDM Systems: Models, Strategies, and Achievable Rates

    After reviewing models and mitigation strategies for interchannel nonlinear interference (NLI), we focus on the frequency-resolved logarithmic perturbation model to study the coherence properties of NLI. Based on this study, we devise an NLI mitigation strategy which exploits the synergistic effect of phase and polarization noise compensation (PPN) and subcarrier multiplexing with symbol-rate optimization. This synergy persists even for high-order modulation alphabets and Gaussian symbols. A particle method for the computation of the resulting achievable information rate and spectral efficiency (SE) is presented and employed to lower-bound the channel capacity. The dependence of the SE on the link length, amplifier spacing, and presence or absence of in-line dispersion compensation is studied. Single-polarization and dual-polarization scenarios with either independent or joint processing of the two polarizations are considered. Numerical results show that, in links with ideal distributed amplification, an SE gain of about 1 bit/s/Hz/polarization can be obtained (or, alternatively, the system reach can be doubled at a given SE) with respect to single-carrier systems without PPN mitigation. The gain is lower with lumped amplification, increases with the number of spans, decreases with the span length, and is further reduced by in-line dispersion compensation. For instance, considering a dispersion-unmanaged link with lumped amplification and an amplifier spacing of 60 km, the SE after 80 spans can be increased from 4.5 to 4.8 bit/s/Hz/polarization, or the reach raised up to 100 spans (+25%) for a fixed SE. Comment: Submitted to the Journal of Lightwave Technology.
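
    The paper's particle method handles the phase- and polarization-noise memory of the NLI channel; as a much simpler illustration of the underlying idea of lower-bounding an achievable information rate with a mismatched (auxiliary) channel, the following hedged numpy sketch estimates the rate for a 16-QAM alphabet. The surrogate AWGN channel, the SNR value and all variable names are assumptions made only to keep the example self-contained; they are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# 16-QAM alphabet with unit average energy (illustrative stand-in for the paper's alphabets)
pam = np.array([-3, -1, 1, 3])
const = (pam[:, None] + 1j * pam[None, :]).ravel()
const /= np.sqrt(np.mean(np.abs(const) ** 2))

n_sym = 200_000
snr_db = 12.0
sigma2 = 10 ** (-snr_db / 10)             # noise variance of the surrogate channel

x = rng.choice(const, size=n_sym)          # i.i.d. uniform input symbols
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
y = x + noise                              # surrogate channel (the real NLI channel is far richer)

# Mismatched-decoding (auxiliary-channel) lower bound:
#   AIR >= E[ log2 q(y|x) - log2 sum_x' P(x') q(y|x') ],  with q a memoryless Gaussian
def log_q(y, x, s2):
    return -np.abs(y - x) ** 2 / s2 - np.log(np.pi * s2)

num = log_q(y, x, sigma2)
den = np.logaddexp.reduce(log_q(y[:, None], const[None, :], sigma2) - np.log(const.size), axis=1)
air = np.mean(num - den) / np.log(2)       # bits per complex symbol
print(f"AIR lower bound ~ {air:.2f} bit/symbol")
```

    When the actual channel differs from q (as the fiber channel does), the same estimator still yields a valid lower bound on the mutual information; the paper's contribution is an auxiliary channel that tracks the NLI phase and polarization noise, which is what the particle method evaluates.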

    Digital signal processing for compensating fiber nonlinearities

    Successful compensation of nonlinear distortions due to fiber Kerr nonlinearities relies on the availability of an accurate channel model. Several models obtained from approximate solutions of the nonlinear Schrödinger equation, as well as the backpropagation method, are considered. It turns out that backpropagation is not the optimal processing technique and, in some cases, is outperformed by simpler processing techniques.
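
    Backpropagation here refers to digital backpropagation (DBP), which inverts the deterministic part of the fiber by running the split-step Fourier method with negated parameters. Below is a minimal single-channel, single-polarization sketch; the sign convention, step structure and default parameter values (typical single-mode-fiber numbers) are illustrative assumptions, not the exact scheme evaluated in the paper.

```python
import numpy as np

def digital_backpropagation(rx, fs, span_length, n_steps,
                            beta2=-21.7e-27, gamma=1.3e-3, alpha_db_km=0.2):
    """Single-channel, single-polarization DBP: split-step Fourier method over a
    fictitious fiber with negated dispersion and Kerr coefficients and gain in
    place of loss. Signs assume a forward model whose linear step multiplies the
    spectrum by exp((1j*beta2/2*w**2 - alpha/2)*dz); defaults are typical SMF values."""
    alpha = alpha_db_km / (10 * np.log10(np.e)) / 1e3     # dB/km -> 1/m (power); field uses alpha/2
    dz = span_length / n_steps
    w = 2 * np.pi * np.fft.fftfreq(rx.size, d=1.0 / fs)   # angular frequency grid
    half_lin = np.exp((-1j * beta2 / 2 * w**2 + alpha / 2) * dz / 2)   # inverted linear half-step
    a = np.asarray(rx, dtype=complex).copy()
    for _ in range(n_steps):
        a = np.fft.ifft(np.fft.fft(a) * half_lin)          # half dispersion + gain
        a *= np.exp(-1j * gamma * np.abs(a)**2 * dz)        # negated Kerr phase (uniform-power step)
        a = np.fft.ifft(np.fft.fft(a) * half_lin)          # second half-step
    return a
```

    In practice DBP is run span by span with the nonlinear step weighted by the local power profile (effective length per step); the uniform-power step above is a deliberate simplification.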

    Stochastic Digital Backpropagation with Residual Memory Compensation

    Stochastic digital backpropagation (SDBP) is an extension of digital backpropagation (DBP) based on the maximum a posteriori principle. In addition to handling deterministic linear and nonlinear impairments, SDBP takes into account noise from the optical amplifiers. Decisions in SDBP are taken on a symbol-by-symbol (SBS) basis, ignoring any residual memory that may be present due to non-optimal processing in SDBP. In this paper, we extend SDBP to account for memory between symbols. In particular, two different methods are proposed: a Viterbi algorithm (VA) and a decision-directed approach. The symbol error rate (SER) of memory-based SDBP is significantly lower than that of the previously proposed SBS-SDBP. For inline dispersion-managed links, VA-SDBP achieves up to 10 and 14 times lower SER than DBP for QPSK and 16-QAM, respectively. Comment: 7 pages, accepted for publication in the Journal of Lightwave Technology (JLT).
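
    The paper's VA-SDBP builds its trellis from the SDBP representation of the channel; as a generic illustration of Viterbi sequence detection over a short residual memory, here is a hedged textbook sketch. The channel taps, noise variance and memory-1 setup are illustrative assumptions, not the paper's branch metrics.

```python
import numpy as np
from itertools import product

def viterbi_detect(rx, alphabet, h, sigma2):
    """Textbook Viterbi detection for rx[k] = sum_m h[m]*s[k-m] + AWGN.
    States are the last len(h)-1 symbol indices; the branch metric is the
    squared Euclidean distance scaled by the noise variance."""
    M, mem = len(alphabet), len(h) - 1
    states = list(product(range(M), repeat=mem))
    cost = np.zeros(len(states))
    paths = [[] for _ in states]
    for y in rx:
        new_cost = np.full(len(states), np.inf)
        new_paths = [None] * len(states)
        for s_idx, state in enumerate(states):
            for a in range(M):                                  # hypothesize current symbol
                past = [alphabet[i] for i in state]
                mean = sum(hm * sm for hm, sm in zip(h, [alphabet[a]] + past))
                metric = cost[s_idx] + abs(y - mean) ** 2 / sigma2
                nxt = states.index(((a,) + state)[:mem]) if mem else 0
                if metric < new_cost[nxt]:
                    new_cost[nxt], new_paths[nxt] = metric, paths[s_idx] + [a]
        cost, paths = new_cost, new_paths
    return [alphabet[i] for i in paths[int(np.argmin(cost))]]

# Toy usage: QPSK with one small residual-ISI tap (arbitrary example values)
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
h = [1.0, 0.2]
tx = [qpsk[i] for i in [0, 3, 1, 2, 2, 0]]
rx = [h[0] * s + h[1] * p for s, p in zip(tx, [0] + tx[:-1])]
print(viterbi_detect(rx, qpsk, h, sigma2=0.1) == tx)            # True on this noise-free toy
```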

    On the Use of Factor Graphs in Optical Communications

    Factor graphs and message passing allow the near-automated development of algorithms in many engineering disciplines, including digital communications. This paper gives an overview of their possible use in optical communications.
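
    As a toy illustration of the message-passing machinery the paper surveys, the sketch below runs the sum-product algorithm on a tiny cycle-free factor graph and checks the resulting marginal against brute-force summation; the factor values are arbitrary example numbers, not tied to any optical-communication model from the paper.

```python
import numpy as np

# Tiny factor graph: p(x1, x2, x3) proportional to p1(x1) * f12(x1, x2) * f23(x2, x3), binary variables.
p1 = np.array([0.6, 0.4])
f12 = np.array([[0.9, 0.1],
                [0.2, 0.8]])             # f12[x1, x2]
f23 = np.array([[0.7, 0.3],
                [0.4, 0.6]])             # f23[x2, x3]

# Sum-product messages flowing along the chain toward x2
m_f12_to_x2 = p1 @ f12                    # sum over x1 of p1(x1) * f12(x1, x2)
m_f23_to_x2 = f23.sum(axis=1)             # sum over x3 of f23(x2, x3)
belief_x2 = m_f12_to_x2 * m_f23_to_x2
belief_x2 /= belief_x2.sum()

# Brute-force check of the same marginal
joint = p1[:, None, None] * f12[:, :, None] * f23[None, :, :]
marg_x2 = joint.sum(axis=(0, 2))
marg_x2 /= marg_x2.sum()

print(belief_x2, marg_x2)                 # identical, since the graph is cycle-free
```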

    Memory-aware end-to-end learning of channel distortions in optical coherent communications

    We implement a new variant of the end-to-end learning approach to improve the performance of an optical coherent-detection communication system. The proposed solution learns joint probabilistic and geometric shaping of symbol sequences by using an auxiliary channel model based on perturbation theory and a refined training procedure for the symbol probabilities. Owing to its structure, the auxiliary channel model, built on first-order perturbation-theory expansions, can be applied efficiently and in parallel while providing a remarkably accurate channel approximation. The learned joint probabilistic and geometric shaping over multiple symbols demonstrates a considerable bit-wise mutual information gain of 0.47 bits/2D-symbol over conventional Maxwell-Boltzmann shaping for single-channel 64 GBd transmission over a 170 km single-mode fiber link.
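
    The learned joint shaping itself is beyond a short sketch, but the conventional Maxwell-Boltzmann baseline it is compared against is easy to reproduce: symbol probabilities proportional to exp(-lambda*|x|^2) on a QAM grid. The sketch below assumes a 64-QAM constellation and an arbitrary 5-bit entropy target, with lambda found by bisection; these choices are illustrative, not taken from the paper.

```python
import numpy as np

# Maxwell-Boltzmann probabilistic shaping on 64-QAM (the conventional baseline
# mentioned in the abstract); the target entropy below is an arbitrary example.
pam = np.arange(-7, 8, 2)
const = (pam[:, None] + 1j * pam[None, :]).ravel()          # 64-QAM points

def mb_probs(lam):
    p = np.exp(-lam * np.abs(const) ** 2)
    return p / p.sum()

def entropy(p):
    return -np.sum(p * np.log2(p))

target_bits = 5.0                                             # desired source entropy, bit/2D-symbol
lo, hi = 0.0, 1.0
for _ in range(60):                                           # bisection: entropy decreases as lam grows
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if entropy(mb_probs(mid)) > target_bits else (lo, mid)
lam = (lo + hi) / 2
p = mb_probs(lam)
print(f"lambda = {lam:.4f}, H(X) = {entropy(p):.3f} bit, E|X|^2 = {np.sum(p * np.abs(const)**2):.2f}")
```

    Lowering the target entropy concentrates probability on the low-energy points, trading rate for reduced average power; the paper's approach instead learns the probabilities (and the point positions) jointly against a perturbation-based channel model.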

    CSpritz: accurate prediction of protein disorder segments with annotation for homology, secondary structure and linear motifs

    CSpritz is a web server for the prediction of intrinsic protein disorder. It combines the previous Spritz method with two novel orthogonal systems developed by our group (Punch and ESpritz). Punch is based on sequence and structural templates trained with support vector machines. ESpritz is an efficient single-sequence method based on bidirectional recursive neural networks. Spritz was extended to filter predictions based on structural homologues. After extensive testing, predictions are combined by averaging their probabilities. The CSpritz website can process single or multiple predictions for either short or long disorder. The server provides a global output page for downloading all predictions and viewing their combined statistics. Links are provided to a page for each protein, where the amino acid sequence and disorder prediction are displayed along with per-protein statistics. As a novel feature, CSpritz provides information about structural homologues, as well as secondary structure and short functional linear motifs, in each disordered segment. Benchmarking was performed on the very recent CASP9 data, where CSpritz would have ranked consistently well, with a Sw measure of 49.27 and an AUC of 0.828. The server, together with help and methods pages including examples, is freely available at URL: http://protein.bio.unipd.it/cspritz/
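
    The combination step described above (averaging the component predictors' per-residue probabilities) can be sketched as follows; the 0.5 threshold, the toy scores and the segmentation helper are illustrative assumptions, not the server's tuned pipeline.

```python
import numpy as np

def combine_and_segment(prob_lists, threshold=0.5):
    """Average per-residue disorder probabilities from several predictors and
    return (start, end) indices (0-based, inclusive) of predicted disordered segments.
    The 0.5 threshold is an illustrative assumption, not the server's tuned cutoff."""
    avg = np.mean(np.asarray(prob_lists), axis=0)
    disordered = avg >= threshold
    segments, start = [], None
    for i, flag in enumerate(disordered):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(disordered) - 1))
    return avg, segments

# Example: three predictors (Punch-, ESpritz- and Spritz-like scores) on a 10-residue toy sequence
scores = [[0.1, 0.2, 0.7, 0.9, 0.8, 0.4, 0.3, 0.6, 0.7, 0.2],
          [0.2, 0.1, 0.6, 0.8, 0.9, 0.5, 0.2, 0.7, 0.8, 0.1],
          [0.1, 0.3, 0.8, 0.7, 0.9, 0.3, 0.4, 0.5, 0.9, 0.3]]
print(combine_and_segment(scores))        # segments (2, 4) and (7, 8) on this toy input
```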

    Highlights from the Pierre Auger Observatory

    The Pierre Auger Observatory is the world's largest cosmic ray observatory. Our current exposure reaches nearly 40,000 km^2 sr and provides us with a data set of unprecedented quality. The performance and stability of the detectors and their enhancements are described. Data analyses have led to a number of major breakthroughs. Among these we discuss the energy spectrum and the searches for large-scale anisotropies. We present analyses of our Xmax data and show how they can be interpreted in terms of mass composition. We also describe some new analyses that extract mass-sensitive parameters from the 100% duty cycle surface-detector (SD) data. A coherent interpretation of all these recent results opens new directions. The consequences regarding the cosmic ray composition and the properties of UHECR sources are briefly discussed. Comment: 9 pages, 12 figures, talk given at the 33rd International Cosmic Ray Conference, Rio de Janeiro, 2013.

    Measurement of the Depth of Maximum of Extensive Air Showers above 10^18 eV

    We describe the measurement of the depth of maximum, Xmax, of the longitudinal development of air showers induced by cosmic rays. Almost four thousand events above 10^18 eV observed by the fluorescence detector of the Pierre Auger Observatory in coincidence with at least one surface detector station are selected for the analysis. The average shower maximum was found to evolve with energy at a rate of (106 +35/-21) g/cm^2/decade below 10^(18.24 +/- 0.05) eV and (24 +/- 3) g/cm^2/decade above this energy. The measured shower-to-shower fluctuations decrease from about 55 to 26 g/cm^2 over this energy range. The interpretation of these results in terms of the cosmic ray mass composition is briefly discussed. Comment: Accepted for publication by PR
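
    Written out, the quoted rates describe a broken-line (elongation-rate) model for the mean depth of maximum, with the slopes and break energy taken from the numbers above; the parametrization below is a standard form, not copied from the paper.

```latex
\langle X_{\max}\rangle(E) \;\simeq\; \langle X_{\max}\rangle(E_b) \;+\;
\begin{cases}
D_1\,\log_{10}(E/E_b), & E < E_b,\\
D_2\,\log_{10}(E/E_b), & E \ge E_b,
\end{cases}
\qquad
D_1 = 106^{+35}_{-21},\quad D_2 = 24 \pm 3 \ \mathrm{g/cm^2/decade},\quad
E_b = 10^{18.24 \pm 0.05}\,\mathrm{eV}.
```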

    The exposure of the hybrid detector of the Pierre Auger Observatory

    The Pierre Auger Observatory is a detector for ultra-high energy cosmic rays. It consists of a surface array to measure secondary particles at ground level and a fluorescence detector to measure the development of air showers in the atmosphere above the array. The "hybrid" detection mode combines the information from the two subsystems. We describe the determination of the hybrid exposure for events observed by the fluorescence telescopes in coincidence with at least one water-Cherenkov detector of the surface array. A detailed knowledge of the time dependence of the detection operations is crucial for an accurate evaluation of the exposure. We discuss the relevance of monitoring data collected during operations, such as the status of the fluorescence detector, background light and atmospheric conditions, that are used in both simulation and reconstruction. Comment: Paper accepted by Astroparticle Physics.
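
    Schematically, the exposure being determined here is the hybrid detection efficiency integrated over area, solid angle and the operational time periods; in a standard notation (the symbols are generic, not quoted from the paper):

```latex
\mathcal{E}(E) \;=\; \int_{T}\!\!\int_{\Omega}\!\!\int_{S}
\varepsilon(E, t, \theta, \phi, x, y)\,\cos\theta \;\mathrm{d}S\,\mathrm{d}\Omega\,\mathrm{d}t ,
```

    where the efficiency ε covers triggering and reconstruction; its time dependence is precisely where the monitoring data (fluorescence-detector status, background light, atmospheric conditions) enter the simulation.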

    A search for point sources of EeV photons

    Measurements of air showers made using the hybrid technique developed with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for point sources of EeV photons anywhere in the exposed sky. A multivariate analysis reduces the background of hadronic cosmic rays. The search is sensitive to a declination band from -85° to +20°, in an energy range from 10^17.3 eV to 10^18.5 eV. No photon point source has been detected. An upper limit on the photon flux has been derived for every direction. The mean value of the energy flux limit that results from this, assuming a photon spectral index of -2, is 0.06 eV cm^-2 s^-1, and no celestial direction exceeds 0.25 eV cm^-2 s^-1. These upper limits constrain scenarios in which EeV cosmic ray protons are emitted by non-transient sources in the Galaxy. Comment: 28 pages, 10 figures, accepted for publication in The Astrophysical Journal.
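
    For reference, converting a particle-flux limit into an energy-flux limit under the assumed E^-2 spectrum follows from integrating the power law between E_1 = 10^17.3 eV and E_2 = 10^18.5 eV; the relation below is a standard conversion, not a formula quoted from the paper.

```latex
\Phi = \int_{E_1}^{E_2} k\,E^{-2}\,\mathrm{d}E = k\!\left(\frac{1}{E_1}-\frac{1}{E_2}\right),
\qquad
F_E = \int_{E_1}^{E_2} E\,k\,E^{-2}\,\mathrm{d}E = k\,\ln\frac{E_2}{E_1},
\qquad
\Rightarrow\; F_E = \Phi\,\frac{\ln(E_2/E_1)}{1/E_1 - 1/E_2}.
```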